

Search for: All records

Creators/Authors contains: "Yu, Tingting"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Bug report reproduction is a crucial but time-consuming task in mobile app maintenance. To accelerate this process, researchers have developed automated techniques for reproducing mobile app bug reports. However, lacking an effective mechanism to recognize the different buggy behaviors described in reports, existing work is limited to reproducing crash bug reports or requires developers to manually analyze execution traces to determine whether a bug was successfully reproduced. To address this limitation, we introduce a novel technique that automatically identifies and extracts the buggy behavior from a bug report and detects it during the automated reproduction process. To accommodate the diverse buggy behaviors of mobile apps, we conducted an empirical study and created a standardized representation for expressing the behaviors it identified. Given a report, our approach first transforms the documented buggy behavior into this standardized representation, then matches it against real-time device and UI information during reproduction to recognize the bug. Our empirical evaluation demonstrated that our approach achieved over 90% precision and recall in generating the standardized representation of buggy behaviors. It correctly identified bugs in 83% of the bug reports and enhanced existing reproduction techniques, allowing them to reproduce four times as many bug reports.
    Free, publicly-accessible full text available June 19, 2026
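The paper's actual representation format is not given in the abstract. As a hedged illustration of the core idea, the sketch below checks a standardized buggy-behavior condition against live UI/device state during reproduction; the names `BugCondition` and `UiState`, the condition kinds, and the example texts are all invented:

```python
# Hypothetical sketch: a standardized buggy-behavior representation
# matched against observed UI/device state during automated reproduction.
# All names and condition kinds are illustrative, not from the paper.
from dataclasses import dataclass


@dataclass
class UiState:
    visible_texts: list  # texts currently shown on screen
    crashed: bool        # whether the app process died


@dataclass
class BugCondition:
    kind: str                # "crash" or "text_shown" (invented kinds)
    expected_text: str = ""

    def matches(self, state: UiState) -> bool:
        # Recognize the bug from real-time state rather than
        # requiring manual trace analysis.
        if self.kind == "crash":
            return state.crashed
        if self.kind == "text_shown":
            return self.expected_text in state.visible_texts
        return False


# A non-crash bug: "the error message 'Invalid date' appears"
cond = BugCondition(kind="text_shown", expected_text="Invalid date")
state = UiState(visible_texts=["Settings", "Invalid date"], crashed=False)
```

This is what lets a reproducer go beyond crash-only detection: any observable behavior expressible as such a condition can signal a successful reproduction.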
  2. Free, publicly-accessible full text available April 28, 2026
  3. Free, publicly-accessible full text available April 1, 2026
  4. The large demand for mobile devices creates significant concerns about the quality of mobile applications (apps). Developers need to guarantee the quality of mobile apps before they are released to the market. Many approaches have used different strategies to test the GUIs of mobile apps, but they still need improvement due to their limited effectiveness. In this article, we propose DinoDroid, an approach based on deep Q-networks to automate testing of Android apps. DinoDroid learns a behavior model from a set of existing apps, and the learned model can be used to explore and generate tests for new apps. DinoDroid is able to capture the fine-grained details of GUI events (e.g., the content of GUI widgets) and use them as features that are fed into a deep neural network, which acts as the agent that guides app exploration. DinoDroid automatically adapts the learned model during exploration without the need for any modeling strategies or predefined rules. We conducted experiments on 64 open-source Android apps. The results show that DinoDroid outperforms existing Android testing tools in terms of code coverage and bug detection.
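DinoDroid itself uses a deep Q-network over GUI-widget features; as a much-simplified stand-in, a tabular Q-learning update over screens (states) and GUI events (actions) conveys the underlying exploration idea. The states, actions, reward, and hyperparameters below are toy values, not from the paper:

```python
# Simplified tabular Q-learning sketch of reinforcement-learning-guided
# app exploration (DinoDroid's real agent is a deep neural network over
# GUI-event features; this toy state/action space is purely illustrative).
import random

random.seed(0)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
q = {}  # (state, action) -> estimated value


def choose_action(state, actions):
    # epsilon-greedy: mostly exploit the best-known GUI event,
    # occasionally explore a random one
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))


def update(state, action, reward, next_state, next_actions):
    # Standard Q-learning backup toward reward + discounted best next value
    best_next = max((q.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)


# Toy app model: screens are states, GUI events are actions;
# reward 1.0 for reaching a previously unseen screen ("B").
update("A", "tap_menu", 1.0, "B", ["tap_back"])
```

In the toy run, the value of `("A", "tap_menu")` rises to 0.5 after one update, so a greedy agent on screen "A" now prefers `tap_menu`, mirroring how learned values steer exploration toward coverage-increasing events.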
  5. Bug report reproduction is an important but time-consuming task carried out during mobile app maintenance. To accelerate this task, current research has proposed automated reproduction techniques that rely on a guided dynamic exploration of the app to match bug report steps with UI events in a mobile app. However, these techniques struggle to find the correct match when the bug reports have missing or inaccurately described steps. To address these limitations, we propose a new bug report reproduction technique that uses an app's UI model to perform a global search across all possible matches between steps and UI actions and identify the most likely match while accounting for the possibility of missing or inaccurate steps. To do this, our approach reformulates the bug report reproduction process as a Markov model and finds the best paths through the model using a dynamic-programming-based technique. We conducted an empirical evaluation on 72 real-world bug reports. Our approach achieved a 94% reproduction rate across all bug reports and a 93% reproduction rate on bug reports with missing steps, significantly outperforming the state-of-the-art approaches. It was also more effective than those approaches at finding the matches from steps to UI events.
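The Markov-model formulation suggests a Viterbi-style dynamic program over alignments of report steps to UI actions that tolerates missing or inaccurate steps. The scoring function, skip penalty, and all example data below are invented for illustration, not the paper's actual model:

```python
# Hedged sketch of global step-to-action matching: a dynamic program
# scores every alignment of report steps to UI actions, allowing a step
# to be skipped (missing/inaccurate) or an extra UI action to be taken.
# The similarity scores and penalties are made up for this example.
def best_alignment(steps, actions, score, skip_penalty=0.3):
    n, m = len(steps), len(actions)
    # dp[i][j]: best total score matching the first i steps
    # using the first j UI actions
    dp = [[float("-inf")] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == float("-inf"):
                continue
            if i < n and j < m:  # match step i to action j
                dp[i + 1][j + 1] = max(dp[i + 1][j + 1],
                                       dp[i][j] + score(steps[i], actions[j]))
            if i < n:            # skip a missing/inaccurate step, at a cost
                dp[i + 1][j] = max(dp[i + 1][j], dp[i][j] - skip_penalty)
            if j < m:            # extra UI action not mentioned in the report
                dp[i][j + 1] = max(dp[i][j + 1], dp[i][j])
    return dp[n][m]


# Toy similarity: 1.0 when the step text mentions the action name, else 0.
sim = lambda step, act: 1.0 if act in step else 0.0
```

Because the search is global, a step that matches nothing (e.g. an inaccurately described one) costs only the skip penalty instead of derailing the whole reproduction, which is the behavior the abstract credits for handling missing steps.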
  6. Continuous integration (CI) has become a popular method for automating the integration of code changes, testing, and software project delivery. However, sufficient testing prior to code submission is crucial to prevent build breaks. Testing must also provide developers with quick feedback on code changes, which requires fast testing times. While regression test selection (RTS) has been studied to improve the cost-effectiveness of regression testing for lower-level tests (i.e., unit tests), it has not been applied to testing user interfaces (UIs) in application domains such as mobile apps. Testing at the UI level requires different techniques, such as impact analysis and automated test execution. In this paper, we examine the use of RTS in CI settings for UI testing across various open-source mobile apps. Our analysis focuses on Frequency Analysis to understand the need for RTS, Cost Analysis to evaluate the cost of impact analysis and test case selection algorithms, and Test Reuse Analysis to determine the reusability of UI test sequences for automation. The insights from this study will guide practitioners and researchers in developing advanced RTS techniques that can be adapted to CI environments for mobile apps.
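A minimal sketch of the RTS idea under study: rerun only the tests whose coverage intersects the changed files. The coverage map and file names are made up, and real UI-level RTS, as the study discusses, needs finer-grained impact analysis at the event/widget level rather than this file-level approximation:

```python
# Toy regression test selection: select tests whose recorded coverage
# overlaps the set of changed files. Coverage data is hypothetical.
def select_tests(coverage, changed_files):
    changed = set(changed_files)
    return sorted(t for t, files in coverage.items()
                  if changed & set(files))


coverage = {
    "test_login_flow":  ["LoginActivity.java", "AuthService.java"],
    "test_settings_ui": ["SettingsActivity.java"],
    "test_checkout":    ["CartActivity.java", "AuthService.java"],
}
```

A change touching only `AuthService.java` would select two of the three tests, skipping `test_settings_ui`; the study's Cost Analysis question is whether computing this impact information for UI tests is cheaper than simply rerunning everything.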
  7. Mocking frameworks provide convenient APIs for creating mock objects, manipulating their behavior, and verifying their execution, in order to isolate test dependencies in unit testing. This article contributes an in-depth empirical study of whether and how mocking frameworks are used in Apache projects. The key findings and insights of this study include the following. First, mocking frameworks are widely used, appearing in 66% of Apache Java projects, with Mockito, EasyMock, and PowerMock being the three most popular frameworks. Larger-scale and more recent projects tend to show a stronger need for mocking frameworks, underscoring the importance of mocking in practice and in related future research. Second, mocking is overall practiced quite selectively in software projects: not all test files use mocking, nor are all dependencies of a test target mocked. This calls for more future research toward a systematic understanding of when and what to mock, to provide formal guidance to practitioners. Moreover, the intensity of mocking shows different trends across projects' evolution histories, implying the compound effects of various factors such as the pace of a project's growth, the available resources, time pressure, and priorities. This points to an important future research direction: facilitating best mocking practices during software evolution. Furthermore, we revealed the most frequently used APIs in the three most popular frameworks, organized by function type. The top five APIs in each functional type usually account for the majority (78% to 100%) of usage in Apache projects, indicating that developers can focus on these APIs to quickly learn the common usage of these frameworks. We further investigated informal methods of mocking, which do not rely on any mocking framework. These informal mocking methods point to potentially sub-optimal mocking practices that could be improved, as well as to limitations of existing mocking frameworks. Finally, we conducted a developer survey to collect additional insights regarding the above analysis based on practitioners' experience, complementing our repository-mining analysis. Overall, this study offers practitioners profound empirical knowledge of how mocking frameworks are used in practice and sheds light on future research directions for enhancing mocking in practice.
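The frameworks studied are Java-based (Mockito, EasyMock, PowerMock). As a runnable standard-library analogue, Python's `unittest.mock` demonstrates the same three API activities the abstract names: creating a mock, manipulating (stubbing) its behavior, and verifying its execution. The `PaymentGateway` dependency and the `charge` call are hypothetical:

```python
# Stdlib analogue of the three mocking activities the study describes.
# PaymentGateway is an invented test dependency, not from the paper.
from unittest.mock import Mock

# 1. create a mock object standing in for a real dependency
gateway = Mock(name="PaymentGateway")

# 2. manipulate its behavior (stub the return value)
gateway.charge.return_value = "ok"

# code under test calls the dependency in isolation
result = gateway.charge(100)

# 3. verify its execution (call count and arguments)
gateway.charge.assert_called_once_with(100)
```

In Mockito the same trio would be `mock(...)`, `when(...).thenReturn(...)`, and `verify(...)`; the study's API-usage analysis groups framework calls by exactly these functional types.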